
    Baxter permutations rise again

    Abstract: Baxter permutations, so named by Boyce, were introduced by Baxter in his study of the fixed points of continuous functions which commute under composition. Recently Chung, Graham, Hoggatt, and Kleiman obtained a sum formula for the number of Baxter permutations of 2n − 1 objects, but admit to having no interpretation of the individual terms of this sum. We show that in fact the kth term of this sum counts the number of (reduced) Baxter permutations that have exactly k − 1 rises.
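
    The claim is easy to check by brute force for small n. The Python sketch below is illustrative and not from the paper: it uses the standard vincular-pattern characterization of Baxter permutations (avoidance of 2-41-3 and 3-14-2), tallies Baxter permutations by number of rises, and compares the tallies with the terms of the Chung-Graham-Hoggatt-Kleiman sum as it appears in the literature. For n = 5 it reproduces the Baxter number 92, split as 1 + 20 + 50 + 20 + 1.

```python
from itertools import permutations
from math import comb

def is_baxter(p):
    """Baxter iff p avoids the vincular patterns 2-41-3 and 3-14-2."""
    n = len(p)
    for q in range(1, n - 2):              # p[q], p[q+1] are the adjacent pair
        for i in range(q):                 # i < q
            for r in range(q + 2, n):      # r > q + 1
                if p[q + 1] < p[i] < p[r] < p[q]:   # occurrence of 2-41-3
                    return False
                if p[q] < p[r] < p[i] < p[q + 1]:   # occurrence of 3-14-2
                    return False
    return True

def rises(p):
    """Number of positions i with p[i] < p[i+1]."""
    return sum(p[i] < p[i + 1] for i in range(len(p) - 1))

def summand(n, k):
    """k-th term of the Chung-Graham-Hoggatt-Kleiman sum; each term is
    an integer, so exact integer division is safe here."""
    return (comb(n + 1, k - 1) * comb(n + 1, k) * comb(n + 1, k + 1)
            // (comb(n + 1, 1) * comb(n + 1, 2)))

n = 5
counts = [0] * n                           # counts[j] = Baxter perms with j rises
for p in permutations(range(1, n + 1)):
    if is_baxter(p):
        counts[rises(p)] += 1
print(counts)                                    # -> [1, 20, 50, 20, 1]
print([summand(n, k) for k in range(1, n + 1)])  # k-th term <-> k-1 rises
```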

    Enumerating nested and consecutive partitions


    Weight enumerators of self-orthogonal codes

    Abstract: Canonical forms are given for (i) the weight enumerator of an [n, (n−1)/2] self-orthogonal code, and (ii) the split weight enumerator (which classifies the codewords according to the weights of the left- and right-half words) of an [n, n/2] self-dual code.
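
    For orientation, the sketch below (illustrative; the code and generator matrix are our choices, not the paper's) computes both kinds of enumerator for the extended [8, 4] Hamming code, a standard binary self-dual code, by listing all 16 codewords.

```python
from itertools import product
from collections import Counter

# One common generator matrix for the extended [8,4] Hamming code (self-dual).
G = [
    [1, 0, 0, 0, 0, 1, 1, 1],
    [0, 1, 0, 0, 1, 0, 1, 1],
    [0, 0, 1, 0, 1, 1, 0, 1],
    [0, 0, 0, 1, 1, 1, 1, 0],
]

def codewords(G):
    """All F_2 linear combinations of the rows of G."""
    n = len(G[0])
    for coeffs in product([0, 1], repeat=len(G)):
        yield tuple(sum(c * row[j] for c, row in zip(coeffs, G)) % 2
                    for j in range(n))

# Ordinary weight enumerator: codeword count per Hamming weight.
weights = Counter(sum(cw) for cw in codewords(G))
print(dict(weights))  # expect {0: 1, 4: 14, 8: 1}

# Split weight enumerator: classify by (left-half weight, right-half weight).
split = Counter((sum(cw[:4]), sum(cw[4:])) for cw in codewords(G))
print(dict(split))
```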

    Evaluating observed versus predicted forest biomass: R-squared, index of agreement or maximal information coefficient?

    The accurate prediction of forest above-ground biomass is nowadays key to implementing climate change mitigation policies, such as reducing emissions from deforestation and forest degradation. In this context, the coefficient of determination (R²) is widely used as a means of evaluating the proportion of variance in the dependent variable explained by a model. However, the validity of R² for comparing observed versus predicted values has been challenged in the presence of bias, for instance in remote sensing predictions of forest biomass. We tested suitable alternatives, e.g. the index of agreement (d) and the maximal information coefficient (MIC). Our results show that d renders systematically higher values than R², and may easily lead to regarding as reliable models that include an unrealistic number of predictors. Results seemed better for MIC, although MIC favoured local clustering of predictions, whether or not they corresponded to the observations. Moreover, R² was more sensitive to the use of cross-validation than d or MIC, and more robust against overfitted models. Therefore, we discourage the use of statistical measures alternative to R² for evaluating model predictions versus observed values, at least in the context of assessing the reliability of modelled biomass predictions using remote sensing. For those who consider d to be conceptually superior to R², we suggest using its square d², in order to be more analogous to R² and hence facilitate comparison across studies.
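
    For reference, the sketch below (illustrative, not the authors' code) computes the two simpler statistics under their usual definitions: R² as the squared Pearson correlation between observed and predicted values, and Willmott's index of agreement d. MIC is omitted because it requires a dedicated estimator (e.g. the minepy package); the synthetic data are arbitrary.

```python
import numpy as np

def r_squared(obs, pred):
    """Squared Pearson correlation between observations and predictions."""
    return np.corrcoef(obs, pred)[0, 1] ** 2

def willmott_d(obs, pred):
    """Willmott's index of agreement:
    d = 1 - sum((P - O)^2) / sum((|P - mean(O)| + |O - mean(O)|)^2)."""
    mean_obs = obs.mean()
    num = np.sum((pred - obs) ** 2)
    den = np.sum((np.abs(pred - mean_obs) + np.abs(obs - mean_obs)) ** 2)
    return 1.0 - num / den

rng = np.random.default_rng(0)
obs = rng.gamma(shape=2.0, scale=50.0, size=200)   # synthetic biomass (t/ha)
pred = obs + rng.normal(10.0, 25.0, size=200)      # biased, noisy predictions
print(f"R^2 = {r_squared(obs, pred):.3f}, d = {willmott_d(obs, pred):.3f}")
```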

    A dimensionally continued Poisson summation formula

    We generalize the standard Poisson summation formula for lattices so that it operates on the level of theta series, allowing us to introduce noninteger dimension parameters (using the dimensionally continued Fourier transform). When combined with one of the proofs of the Jacobi imaginary transformation of theta functions that does not use the Poisson summation formula, our proof of this generalized Poisson summation formula also provides a new proof of the standard Poisson summation formula for dimensions greater than 2 (with appropriate hypotheses on the function being summed). In general, our methods work to establish the (Voronoi) summation formulae associated with functions satisfying (modular) transformations of the Jacobi imaginary type by means of a density argument (as opposed to the usual Mellin transform approach). In particular, we construct a family of generalized theta series from Jacobi theta functions from which these summation formulae can be obtained. This family contains several families of modular forms, but is significantly more general than any of them. Our result also relaxes several of the hypotheses in the standard statements of these summation formulae. The density result we prove for Gaussians in the Schwartz space may be of independent interest.
    Comment: 12 pages, version accepted by JFAA, with various additions and improvements
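
    For orientation, the two classical statements being generalized can be written as follows (standard textbook forms, not the paper's dimensionally continued versions):

```latex
% Classical Poisson summation formula for a lattice L in R^d with dual L*,
% valid e.g. for Schwartz functions f:
\[
  \sum_{x \in L} f(x) \;=\; \frac{1}{\operatorname{covol}(L)} \sum_{k \in L^{*}} \hat{f}(k),
  \qquad
  \hat{f}(k) = \int_{\mathbb{R}^{d}} f(x)\, e^{-2\pi i \langle k, x \rangle}\, dx .
\]
% Applied to the Gaussian f(x) = e^{-\pi t x^2} on the integer lattice Z,
% it yields the Jacobi imaginary transformation of the theta function:
\[
  \theta(t) \;=\; \sum_{n \in \mathbb{Z}} e^{-\pi n^{2} t}
  \quad\Longrightarrow\quad
  \theta(1/t) \;=\; \sqrt{t}\,\theta(t), \qquad t > 0 .
\]
```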

    Low Complexity Regularization of Linear Inverse Problems

    Inverse problems and regularization theory form a central theme in contemporary signal processing, where the goal is to reconstruct an unknown signal from partial, indirect, and possibly noisy measurements of it. A now standard method for recovering the unknown signal is to solve a convex optimization problem that enforces some prior knowledge about its structure. This has proved efficient in many problems routinely encountered in imaging sciences, statistics and machine learning. This chapter delivers a review of recent advances in the field where the regularization prior promotes solutions conforming to some notion of simplicity/low-complexity. These priors encompass, as popular examples, sparsity and group sparsity (to capture the compressibility of natural signals and images), total variation and analysis sparsity (to promote piecewise regularity), and low-rank (as a natural extension of sparsity to matrix-valued data). Our aim is to provide a unified treatment of all these regularizations under a single umbrella, namely the theory of partial smoothness. This framework is very general and accommodates all the low-complexity regularizers just mentioned, as well as many others. Partial smoothness turns out to be the canonical way to encode low-dimensional models that can be linear spaces or more general smooth manifolds. This review is intended to serve as a one-stop shop toward the understanding of the theoretical properties of the so-regularized solutions. It covers a large spectrum including: (i) recovery guarantees and stability to noise, both in terms of ℓ²-stability and model (manifold) identification; (ii) sensitivity analysis to perturbations of the parameters involved (in particular the observations), with applications to unbiased risk estimation; (iii) convergence properties of the forward-backward proximal splitting scheme, which is particularly well suited to solving the corresponding large-scale regularized optimization problem.
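
    To make point (iii) concrete, here is a minimal sketch (ours, under standard assumptions; the chapter covers the general scheme) of forward-backward splitting for the ℓ¹-regularized least squares problem min_x ½‖Ax − y‖² + λ‖x‖₁, where the proximal step reduces to soft-thresholding (the classical ISTA iteration). The step size and problem sizes are arbitrary choices.

```python
import numpy as np

def soft_threshold(x, tau):
    """Proximal operator of tau * ||.||_1 (entrywise soft-thresholding)."""
    return np.sign(x) * np.maximum(np.abs(x) - tau, 0.0)

def forward_backward(A, y, lam, n_iter=500):
    """ISTA: x <- prox_{gamma*lam*||.||_1}( x - gamma * A^T (A x - y) ).

    Any gamma < 2 / ||A||_2^2 guarantees convergence for this
    smooth + nonsmooth splitting.
    """
    gamma = 1.0 / np.linalg.norm(A, 2) ** 2
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        x = soft_threshold(x - gamma * A.T @ (A @ x - y), gamma * lam)
    return x

# Synthetic sparse recovery: 3-sparse signal, 40 random measurements in dim 100.
rng = np.random.default_rng(1)
A = rng.normal(size=(40, 100)) / np.sqrt(40)
x_true = np.zeros(100)
x_true[[5, 37, 80]] = [2.0, -1.5, 1.0]
y = A @ x_true + 0.01 * rng.normal(size=40)
x_hat = forward_backward(A, y, lam=0.05)
print("recovered support:", np.nonzero(np.abs(x_hat) > 0.1)[0])
```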

    Stochastic particle packing with specified granulometry and porosity

    This work presents a technique for particle size generation and placement in arbitrary closed domains. Its main application is the simulation of granular media described by disks. Particle size generation is based on the statistical analysis of granulometric curves, which are used as empirical cumulative distribution functions to sample from mixtures of uniform distributions. The desired porosity is attained by selecting a certain number of particles, and their placement is performed by a stochastic point process. We present an application analyzing different types of sand and clay, where we model the grain size with the gamma, lognormal, Weibull and hyperbolic distributions. The parameters from the resulting best fit are used to generate samples from the theoretical distribution, which are used for filling a finite-size area with non-overlapping disks deployed by a Simple Sequential Inhibition stochastic point process. Such filled areas are relevant as plausible inputs for assessing the Discrete Element Method and similar techniques.
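
    The pipeline is straightforward to prototype. The sketch below (illustrative, with arbitrary parameter values in place of the paper's fitted grain-size distributions) samples disk radii from a gamma distribution and deploys them in the unit square by Simple Sequential Inhibition: uniform random proposals, rejecting any disk that overlaps one already placed.

```python
import numpy as np

def ssi_pack(radii, max_tries=2000, rng=None):
    """Simple Sequential Inhibition in the unit square: propose a uniform
    position for each disk; reject proposals overlapping accepted disks."""
    rng = rng or np.random.default_rng()
    placed = []  # list of (x, y, r)
    for r in radii:
        for _ in range(max_tries):
            x, y = rng.uniform(r, 1 - r, size=2)   # keep disk inside the square
            if all((x - px) ** 2 + (y - py) ** 2 >= (r + pr) ** 2
                   for px, py, pr in placed):
                placed.append((x, y, r))
                break                              # accepted; next disk
    return placed

rng = np.random.default_rng(42)
# Gamma-distributed radii stand in for a fitted grain-size distribution;
# placing larger disks first is a common heuristic for denser packings.
radii = np.sort(rng.gamma(shape=2.0, scale=0.01, size=300))[::-1]
disks = ssi_pack(radii, rng=rng)
solid = sum(np.pi * r ** 2 for _, _, r in disks)
print(f"placed {len(disks)} of {len(radii)} disks; porosity = {1 - solid:.3f}")
```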


    Crowdordering
